8 ways to spot AI in publishing—that once you know…

Written by Oz Osbaldeston


It’s no secret that AI tools like ChatGPT and Claude are being widely used in the creative industries, particularly for written submissions. At Dapple, we receive thousands of submissions through our platform, and we can see firsthand the growing concern among our users. AI is not just a looming threat; it’s already here. You can rail against it, put warnings on your website or add a checkbox to your form ("I confirm that AI wasn’t used in the creation of this work"), but it won’t solve the problem - how many people seriously tick that box? As LLMs become more ubiquitous and powerful, and the technical barriers to using them fall away, the problem is only going to get worse. So what’s the best way to protect your organisation against AI-generated submissions?

There are essentially three ways:

  1. Use a combination of deterrents such as those listed above. 
  2. Use AI detection tools to identify any suspicious submissions. 
  3. Use human intuition to spot the culprits. 

Whilst a combination of all three is the best way to build up your armoury, this article will focus on the third: how to spot the tell-tale signs of AI in a piece of work… Did you catch the one in the title of this article?

Spotting AI-Generated Text

As AI tools grow more sophisticated, it’s getting harder to tell machine-made from human-made. But there are still patterns and curious quirks that give the game away. Here’s what to look for:

1. Repetitive patterns – Humans vary sentence length, tone, and rhythm instinctively. It’s what makes us human - an innate characteristic evolved over thousands of years. AI, however, often repeats the same phrase structures or relies on a handful of go-to connectors (“Additionally…”, “Moreover…”). Reusing these phrases and structures restates similar points without advancing the argument, leaving the content feeling padded, fluffy and predictable. The effect is more noticeable in longer-form writing, and it often starts with a nagging sense that something just feels a little ‘off’. 
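For the technically inclined, this first tell can even be approximated in code. Below is a minimal sketch in Python of a crude “padding” heuristic: it flags text whose sentence lengths barely vary and which leans heavily on stock connectors. The connector list, the thresholds and the function name `looks_padded` are all our own illustrative assumptions - a toy, not a production detector:

```python
import re
import statistics

# Stock connectors AI-generated text tends to over-use (illustrative list).
CONNECTORS = ("additionally", "moreover", "furthermore", "consequently")

def looks_padded(text, min_length_stdev=3.0, max_connector_rate=0.02):
    """Flag text with uniform sentence lengths AND heavy connector use.

    Thresholds are guesses for illustration, not calibrated values.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    if len(sentences) < 2:
        return False  # too little text to judge
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    connector_hits = sum(w.strip(",;") in CONNECTORS for w in words)
    uniform = statistics.stdev(lengths) < min_length_stdev
    connector_heavy = connector_hits / len(words) > max_connector_rate
    return uniform and connector_heavy
```

A real detector would weigh far more signals than this, but even a toy like the above captures the intuition: rhythm and variety are hard for a model to fake consistently.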

2. Abrupt shifts in style – A paragraph that starts like a blog post but suddenly reads like a Wikipedia entry is a tell-tale sign that the model is pulling in content from different sources and failing to blend them. 

3. Surface-level ideas – AI lacks lived experience. To quote Robin Williams’ character in Good Will Hunting, speaking to the main character Will (essentially a walking LLM): AI might give you “the skinny on every art book ever written. Michelangelo… life’s work, political aspirations.” But when it comes to that deeper level: “I’ll bet you can’t tell me what it smells like in the Sistine Chapel. You’ve never actually stood there and looked up at that beautiful ceiling.” AI is the same. It can regurgitate, but it can’t feel. Text that seems formulaic, generic, or missing that emotional nuance is often a red flag.

4. Buzzword overload – Overuse of trendy terms without depth can indicate filler from an AI model. Renowned lexicographer and wordsmith Susie Dent (of Countdown fame) recently pointed out that words such as 'delve', 'transformative', 'dynamic', 'navigating' and 'multifaceted', and phrases such as 'rich tapestry', 'embarking on a journey' and 'game changer' are indicators that AI has been used to generate text. “AI absolutely loves the jargon that we are all used to using,” Dent added. 

5. Too-perfect grammar – Despite the plethora of tools out there to help with spelling and grammar, “to err is human”, as Alexander Pope once mused, and humans make mistakes. A fully polished work can be a red flag. We also like to break rules for emphasis and style; AI often doesn’t. A flawless piece with no contractions or quirks, no syntactic somersaults that don’t stick the landing, well, that might not be human. There are also a few classic tell-tale signs, like the frequent use of the em dash ( — ). That said, the fanfare this little punctuation mark has received has made many wise to its usage, and those in the know will likely de-em their work.

6. Hallucinated facts – AI can invent convincing but false details. There are countless examples (often quite comical, some even deadly) of AI inventing legal precedents, listing a food bank as a tourist destination, or simply getting it plain wrong - see Google Bard’s first public demo and its false claim about the James Webb Space Telescope. If a fact seems surprising, check it against credible sources.

7. Outdated facts - Some AI models don’t have real-time access to the internet and cannot provide up-to-date information or comment on current events. If a creator is using a model with a knowledge cut-off, it will be extrapolating from patterns learned before that point, and the result may be inaccurate. Many tools now have internet search built in and can read uploaded documents to be “brought up to speed”, even on events after their initial training, but it’s still worth looking for claims that might no longer be current.

8. Odd placeholders – “Insert name here” or mismatched details can appear when an AI lacks specifics. Some users will hit the copy button without actually reading what they’re being fed, and these are a dead giveaway. Related are attempts to use existing tools to bypass plagiarism or AI checkers, which often result in strange words or phrases. Many students, for example, use tools like Quillbot to “paraphrase” existing academic essays. This can result in phrases being “translated” literally, losing all meaning from the context in which they were originally written. Again, a lax review of the generated work and a hasty paste can leave a creator with the proverbial AI-generated egg on their face.


In Summary

The best defence against AI deception is a mix of awareness, verification, and intuition. Whilst we see many attempts to protect against AI-generated content, such as stark warnings, checkboxes and AI detection tools (coming soon to Dapple too), the best defence is, and probably always will be, you, dear reader. 

Got 15 minutes?

We know you're busy, but we'd love to save you time. Schedule a demo with our sales team and see how Dapple can help.

Book a demo